%elabor[w88,jmc]		Elaboration tolerance
\input memo.tex[let,jmc]
\title{NOTES ON ELABORATION TOLERANCE}

\noindent Abstract: This paper is a first attempt to treat the
concept of {\it elaboration tolerance} systematically.  The idea
is that the information in a database of common sense knowledge,
or any knowledge to be used by programs in situations calling
for common sense, must be elaboratable in various ways.  Making
a formalism that permits such elaboration requires different
kinds of formalization than have been customary in science or
even in artificial intelligence.
\section{Introduction}

	A textbook of almost any science or of the philosophy of science
gives the following advice about constructing formal models.  Before
building the model, decide what phenomena are to be taken into account and
what concepts are to be used.  In the case of probability or statistics,
the advice is to choose a sample space.  In the case of logical formalization,
the advice is to choose a language, i.e. the collection of predicate and
function symbols to be used.  Operations research and decision theory
involve the same informal stage before formal reasoning begins.
This advice is unsuitable for our purposes in two ways.
First of all, the process of deciding what phenomena to take into account
is one we would eventually like to program computers to do themselves.
Moreover, this advice is incompatible with constructing a database of
common sense knowledge that can be used by any program having to deal with
the phenomena to be described in the database.

	Some of those who despair of artificial intelligence seem to
be appealing to intuitions that formalization, whether in logic
or in some other form, requires a commitment to a delimited system
of concepts that experience may require transcending.  The purpose of
this paper is to give some examples of the problem and propose a bag
of tricks for solving it, i.e. for making logical formalizations that
can transcend the knowledge that went into their formalization.  These
tricks include {\it nonmonotonic reasoning}, {\it contexts as objects}
and perhaps {\it mental situations}.  The goal is a formalization that
is {\it elaboration tolerant}, i.e. can be expanded to take new
phenomena into account.

	Here's the main idea.  The rest of the paper elaborates it.
Consider an axiom $p$.  Instead of putting $p$ in our database, we put
$holds(p,c)$ in the database, where $c$ is the name of a context.  The $c$
used in a general database is considered to represent what a user of the
database will ordinarily presume, both factually and linguistically.  For
example, in $c$ it is assumed that {\it penguin} is ordinarily the name of a
kind of bird and not a brand of cigarettes, and that
penguins ordinarily can't fly and can swim.  However, the inference
from the facts in the database to the conclusion that {\it penguin}
in a particular situation refers to penguins and that the penguins
in the situation can't fly is a nonmonotonic inference, e.g. is the
result of a suitable circumscription.  There are operations on
contexts that result in new contexts in which some assumptions of
the original context are modified.  We propose this as a model
of how a human transcends his own customary assumptions.  However,
as usual, we will concern ourselves in this paper with how the
idea can be made to work rather than with its ability to explain
the facts of psychology.  Our excuse for this is the usual one.
Devising mechanisms that work is preliminary to devising mechanisms
that fit human or animal behavior.  We aren't done yet with this
first stage.
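
	To make this concrete, here is a sketch of how the penguin facts
might be written in a database context $c0$.  The predicate, function and
context names and the particular axioms are only illustrative, and the $ab$
predicates are to be minimized by circumscription in the usual way.
$$\eqalign{
&holds(\forall x.\,penguin(x) \supset bird(x),\ c0)\cr
&holds(\forall x.\,bird(x) \land \neg ab1(x) \supset flies(x),\ c0)\cr
&holds(\forall x.\,penguin(x) \supset ab1(x) \land \neg flies(x),\ c0)\cr
&holds(\forall x.\,penguin(x) \land \neg ab2(x) \supset swims(x),\ c0)\cr}$$
Circumscribing $ab1$ and $ab2$ within $c0$ yields nonmonotonically that the
penguins in an ordinary situation can't fly and can swim, while an
elaboration, e.g. a new context in which {\it penguin} names a brand of
cigarettes, modifies these conclusions without rewriting the original
axioms.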

	Of course, we are proposing to choose a definite language, and,
moreover, it may well be a first order language.  However, allowing
concepts as objects and very likely mental situations as objects
is intended to realize a {\it universal language}.  In what precise
sense the language is to be universal remains to be determined.
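
	For instance, treating propositions as first order objects suggests
axioms like the following, in which $and$ is a function symbol on reified
propositions; the axiom is only a sketch of one possible treatment, and how
far the connectives and quantifiers should be reified is among the details
to be determined.
$$\forall p\,q\,c.\ holds(and(p,q),c) \equiv holds(p,c) \land holds(q,c).$$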

	Whether the above is a good idea depends on the details.
The details are given in the following sections.
\section{Examples of Requirements for Elaboration Tolerance}


\smallskip\centerline{Copyright \copyright\ \number\year\ by John McCarthy}
\smallskip\noindent{This draft of elabor[w88,jmc]\ TEXed on \jmcdate\ at \theTime}
\vfill\eject\end
notes:
examples:
mother
mixing the use of a formalism that treats certain actions as discrete
with the ability to treat them as extended in time and to refer to events
that occur while the action is being carried out.
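
a possible sketch with purely illustrative names: a context $c1$ treats
flying between two cities as a single discrete action, so the new situation
is just
$$s' = result(fly(SF,NY),s),$$
while an elaborated context $c2$ treats the flight as occupying an interval
of time and permits assertions about events during it, e.g.
$$occurs(serve(meal),\ during(fly(SF,NY),s)).$$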

queries: Which examples are mere abbreviations?  It is not a mere
abbreviation if the elaborated form doesn't exist in the beginning.